On universal quantization by randomized uniform/lattice quantizers

Authors

  • Ram Zamir
  • Meir Feder
Abstract

Uniform quantization with dither, or lattice quantization with dither in the vector case, followed by a universal lossless source encoder (entropy coder), is a simple procedure for universal coding with distortion of a source that may take continuously many values. The rate of this universal coding scheme is examined, and a general expression for it is derived. An upper bound is derived for the redundancy of the scheme, defined as the difference between its rate and the minimal possible rate given by the rate-distortion function of the source. This bound holds for all distortion levels. Furthermore, a composite upper bound on the redundancy is presented as a function of the quantizer resolution, which leads to a tighter bound in the high-rate (low-distortion) case.
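The scalar case of the procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name and interface are hypothetical, and the entropy-coding stage is omitted. The encoder adds a dither uniform on one quantization cell before rounding; the decoder subtracts the same dither, so the reconstruction error is uniform on the cell and independent of the source value.

```python
import random

def dithered_quantize(x, step, rng):
    """Subtractive dithered uniform quantization (sketch).

    The dither u is drawn uniformly on [-step/2, step/2] and is assumed
    to be shared with the decoder (e.g., via a common pseudorandom seed).
    Returns the lattice index (which would feed the entropy coder) and
    the decoder's reconstruction.
    """
    u = rng.uniform(-step / 2, step / 2)
    index = round((x + u) / step)        # encoder: quantize x + u to the lattice step*Z
    reconstruction = index * step - u    # decoder: rescale and subtract the dither
    return index, reconstruction

# The reconstruction error (reconstruction - x) equals the quantization
# error of x + u, so its magnitude never exceeds step/2.
```

In the vector case the rounding step is replaced by a nearest-point search in a lattice, and the dither is drawn uniformly over the lattice's fundamental cell.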


Similar sources

Asymptotic entropy-constrained performance of tessellating and universal randomized lattice quantization

Two results are given. First, using a result of Csiszár, the asymptotic (i.e., high-resolution/low-distortion) performance of entropy-constrained tessellating vector quantization, heuristically derived by Gersho, is proven for all sources with finite differential entropy. This implies, using Gersho's conjecture and Zador's formula, that tessellating vector quantizers are asymptotically optimal f...


An efficient approach to lattice-based fixed-rate entropy-coded vector quantization

In the absence of channel noise, variable-length quantizers perform better than fixed-rate Lloyd-Max quantizers for sources with a non-uniform density function. However, channel errors can lead to a loss of synchronization, resulting in error propagation. To avoid a variable rate, one can use a vector quantizer selected as a subset of high-probability points in the Cartesian product of a...


Universal Deep Neural Network Compression

Compression of deep neural networks (DNNs) for memory- and computation-efficient compact feature representations becomes a critical problem, particularly for deployment of DNNs on resource-limited platforms. In this paper, we investigate lossy compression of DNNs by weight quantization and lossless source coding for memory-efficient inference. Whereas the previous work addressed non-universal scal...


Vector Set-Partitioning with Successive Refinement Voronoi Lattice VQ for Embedded Wavelet Image Coding

While lattice vector quantization (LVQ) can solve the complexity problem of LBG-based vector quantizers and also yield very general codebooks, a single-stage lattice VQ, when applied to high-variance vectors, results in very large and unwieldy indices, making it unsuitable for applications requiring successive refinement. The goal of this work is to develop a unified framework for progressive un...


A Vector Quantization Approach to Universal Noiseless Coding and Quantization (IEEE Transactions on Information Theory)

A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which t...



Journal:
  • IEEE Trans. Information Theory

Volume 38, Issue -

Pages -

Publication year 1992